    Quantum system characterization with limited resources

    The construction and operation of large-scale quantum information devices presents a grand challenge. A major issue is the effective control of coherent evolution, which requires accurate knowledge of the system dynamics, and these may vary from device to device. We review strategies for obtaining such knowledge efficiently and from minimal initial resources, and apply them to the characterization of a qubit embedded in a larger state manifold, a problem made tractable by exploiting prior structural knowledge. We also investigate adaptive sampling for the estimation of multiple parameters.
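
    A minimal sketch of the grid-based Bayesian update behind such characterization schemes, assuming a simple Rabi-type likelihood P(1 | omega, t) = sin^2(omega t / 2); the frequency, probe times, and grid below are illustrative, not the paper's setup:

```python
# Minimal sketch: grid-based Bayesian update for a single qubit frequency,
# assuming the likelihood P(1 | omega, t) = sin^2(omega * t / 2).
# All numbers are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
omega_true = 2.3                       # hypothetical "true" frequency
omegas = np.linspace(0.1, 5.0, 2000)   # hypothesis grid
log_post = np.zeros_like(omegas)       # flat prior, up to a constant

for t in rng.uniform(0.5, 10.0, size=200):   # randomly chosen probe times
    p1 = np.sin(omega_true * t / 2) ** 2     # outcome probability
    outcome = rng.random() < p1              # simulated binary measurement
    p_grid = np.clip(np.sin(omegas * t / 2) ** 2, 1e-12, 1 - 1e-12)
    log_post += np.log(p_grid if outcome else 1 - p_grid)

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean:", np.sum(omegas * post), "true:", omega_true)
```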

    Alfven wave scattering and the secondary to primary ratio

    The cosmic ray abundances have traditionally been used to determine the elemental and isotopic nature of galactic cosmic ray sources and average measures of propagation conditions. Detailed studies of the physics of propagation are usually paired with relatively straightforward estimates of the secondary-to-primary (S/P) ratios. In the work reported here, calculations of elemental abundances are paired with a more careful treatment of the propagation process. It is shown that the physics of propagation does indeed leave specific traces of Galactic structure in cosmic ray abundances.
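
    The "relatively straightforward estimate" referred to above is typically a steady-state leaky-box calculation. A back-of-envelope sketch, with placeholder path lengths in g/cm^2 (a real B/C-style analysis would use measured spallation cross sections and their energy dependence):

```python
# Back-of-envelope leaky-box estimate of a secondary-to-primary ratio.
# Path lengths (g/cm^2) are placeholders; a real B/C analysis would use
# measured spallation cross sections and their energy dependence.
def leaky_box_sp_ratio(lam_esc, lam_spall, lam_sec):
    """Steady-state N_s/N_p for a secondary produced purely by spallation:
    N_s/N_p = (lam_esc / lam_spall) / (1 + lam_esc / lam_sec)."""
    return (lam_esc / lam_spall) / (1.0 + lam_esc / lam_sec)

# Illustrative numbers only: escape ~10, C->B spallation ~25, B inelastic ~10
print(leaky_box_sp_ratio(lam_esc=10.0, lam_spall=25.0, lam_sec=10.0))  # 0.2
```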

    Rapid quantitative pharmacodynamic imaging with Bayesian estimation

    We recently described rapid quantitative pharmacodynamic imaging, a novel method for estimating the sensitivity of a biological system to a drug. We tested its accuracy in simulated biological signals with varying receptor sensitivity and varying levels of random noise, and presented initial proof-of-concept data from functional MRI (fMRI) studies in primate brain. However, the initial simulation testing used a simple iterative approach to estimate pharmacokinetic-pharmacodynamic (PKPD) parameters, an approach that was computationally efficient but returned parameters only from a small, discrete set of values chosen a priori. Here we revisit the simulation testing using a Bayesian method to estimate the PKPD parameters. This improved accuracy compared to our previous method, and noise without an intentional signal was never interpreted as signal. We also reanalyze the fMRI proof-of-concept data. The success with the simulated data, and with the limited fMRI data, is a necessary first step toward further testing of rapid quantitative pharmacodynamic imaging.
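
    A sketch of this kind of Bayesian PKPD estimate, using a standard Emax model and a two-parameter grid posterior; the model, flat priors, and all numbers are assumptions for illustration, not the study's actual pipeline:

```python
# Sketch: Bayesian estimate of Emax-model parameters on a 2-D grid.
# The Emax model, flat priors, and all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # drug concentrations
emax_true, ec50_true, sigma = 1.0, 2.0, 0.05
y = emax_true * conc / (ec50_true + conc) + rng.normal(0, sigma, conc.size)

emax = np.linspace(0.5, 1.5, 200)[:, None]          # parameter grids
ec50 = np.linspace(0.2, 8.0, 300)[None, :]
pred = emax[..., None] * conc / (ec50[..., None] + conc)   # (200, 300, 6)
loglik = -0.5 * np.sum((y - pred) ** 2, axis=-1) / sigma**2
post = np.exp(loglik - loglik.max())
post /= post.sum()
print("posterior mean EC50:", np.sum(post * ec50), "true:", ec50_true)
```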

    Quantum System Identification by Bayesian Analysis of Noisy Data: Beyond Hamiltonian Tomography

    We consider how to characterize the dynamics of a quantum system from a restricted set of initial states and measurements using Bayesian analysis. Previous work has shown that Hamiltonian systems can be well estimated from analysis of noisy data. Here we show how to generalize this approach to systems with moderate dephasing in the eigenbasis of the Hamiltonian. We illustrate the process for a range of three-level quantum systems. The results suggest that the Bayesian estimation of the frequencies and dephasing rates is generally highly accurate, and that the main source of error is error in the reconstructed Hamiltonian basis. Comment: 6 pages, 3 figures.
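
    A toy version of the estimation task: recovering one frequency and one dephasing rate from noisy population data, with a damped cosine standing in for the paper's three-level dynamics; all values are invented for illustration:

```python
# Toy estimation of one frequency and one dephasing rate from noisy data,
# with a damped cosine standing in for the paper's three-level dynamics.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 150)
w_true, g_true, sigma = 1.7, 0.12, 0.03
data = 0.5 + 0.5 * np.exp(-g_true * t) * np.cos(w_true * t) \
       + rng.normal(0, sigma, t.size)

w = np.linspace(1.0, 2.5, 300)[:, None, None]    # frequency grid
g = np.linspace(0.01, 0.5, 120)[None, :, None]   # dephasing-rate grid
model = 0.5 + 0.5 * np.exp(-g * t) * np.cos(w * t)        # (300, 120, 150)
loglik = -0.5 * np.sum((data - model) ** 2, axis=-1) / sigma**2
i, j = np.unravel_index(np.argmax(loglik), loglik.shape)
print("MAP omega:", w[i, 0, 0], "MAP gamma:", g[0, j, 0])
```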

    Use and Abuse of the Fisher Information Matrix in the Assessment of Gravitational-Wave Parameter-Estimation Prospects

    The Fisher-matrix formalism is used routinely in the literature on gravitational-wave detection to characterize the parameter-estimation performance of gravitational-wave measurements, given parametrized models of the waveforms and assuming detector noise of known colored Gaussian distribution. Unfortunately, the Fisher matrix can be a poor predictor of the amount of information obtained from typical observations, especially for waveforms with several parameters and relatively low expected signal-to-noise ratios (SNR), or for waveforms depending weakly on one or more parameters, when their priors are not taken into proper consideration. In this paper I discuss these pitfalls; show how they occur, even for relatively strong signals, with a commonly used template family for binary-inspiral waveforms; and describe practical recipes to recognize them and cope with them. Specifically, I answer the following questions: (i) What is the significance of (quasi-)singular Fisher matrices, and how must we deal with them? (ii) When is it necessary to take into account prior probability distributions for the source parameters? (iii) When is the signal-to-noise ratio high enough to believe the Fisher-matrix result? In addition, I provide general expressions for the higher-order, beyond-Fisher-matrix terms in the 1/SNR expansions for the expected parameter accuracies. Comment: 24 pages, 3 figures, previously known as "A User Manual for the Fisher Information Matrix"; final, corrected PRD version.
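
    The recommended sanity check translates directly into code: build the Fisher matrix numerically and inspect its conditioning before trusting the inverse. The chirp-like toy waveform and white-noise model below are simplified placeholders, not a real inspiral template:

```python
# Numerical Fisher matrix for a toy chirp in white noise, with the
# conditioning check the paper recommends before inverting.
import numpy as np

t = np.linspace(0, 1, 4000)
sigma = 1.0                                    # white-noise std per sample

def h(theta):
    A, f0, fdot = theta                        # amplitude, frequency, drift
    return A * np.sin(2 * np.pi * (f0 + fdot * t) * t)

def fisher(theta, eps=1e-6):
    grads = []
    for i in range(len(theta)):                # central-difference derivatives
        dp = np.array(theta, float); dp[i] += eps
        dm = np.array(theta, float); dm[i] -= eps
        grads.append((h(dp) - h(dm)) / (2 * eps))
    G = np.array(grads)
    return G @ G.T / sigma**2                  # F_ij = <dh/di, dh/dj>/sigma^2

F = fisher([1.0, 30.0, 5.0])
print("condition number:", np.linalg.cond(F))  # large => inverse is suspect
cov = np.linalg.inv(F)                         # valid only if well conditioned
print("1-sigma errors:", np.sqrt(np.diag(cov)))
```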

    MaxEnt power spectrum estimation using the Fourier transform for irregularly sampled data applied to a record of stellar luminosity

    The principle of maximum entropy is applied to the spectral analysis of a data signal with a general variance matrix and containing gaps in the record. The role of the entropic regularizer is to prevent one from overestimating structure in the spectrum when faced with imperfect data. Several arguments are presented suggesting that an arbitrary prefactor should not be introduced to the entropy term; such a factor is not required when a continuous Poisson distribution is used for the amplitude coefficients. We compare the formalism for the case in which the variance of the data is known explicitly to that in which the variance is known only to lie in some finite range. The effect of including the entropic measure factor is to suggest a spectrum, consistent with the variance of the data, that has less structure than that given by the forward transform. An application of the methodology to example data is demonstrated. Comment: 15 pages, 13 figures, 1 table, major revision, final version, accepted for publication in Astrophysics & Space Science.
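
    A sketch of an entropy-regularized spectral fit to gapped, irregularly sampled data, minimizing chi^2/2 - alpha*S over sine and cosine amplitudes; the frequency grid, regularization weight alpha, and default model m are assumptions, and the entropy form is the generic MaxEnt recipe rather than this paper's exact formalism:

```python
# Entropy-regularized spectral fit to gapped, irregular samples: minimize
# chi^2/2 - alpha*S over sine/cosine amplitudes.  Grid, alpha, and default
# model m are assumptions; the entropy form is the generic MaxEnt recipe.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 20, 80))        # irregular sample times with gaps
d = np.sin(2.0 * t) + 0.1 * rng.normal(size=t.size)

freqs = np.linspace(0.5, 4.0, 40)
C, S = np.cos(np.outer(t, freqs)), np.sin(np.outer(t, freqs))
alpha, m, sig = 0.5, 1e-3, 0.1

def objective(x):
    a, b = x[:40], x[40:]
    resid = d - C @ a - S @ b
    power = a**2 + b**2 + 1e-12            # spectral power per frequency
    entropy = -np.sum(power * np.log(power / m))
    return 0.5 * np.sum(resid**2) / sig**2 - alpha * entropy

res = minimize(objective, np.zeros(80), method="L-BFGS-B")
a, b = res.x[:40], res.x[40:]
print("peak near omega =", freqs[np.argmax(a**2 + b**2)])   # expect ~2.0
```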

    Modeling the R2* relaxivity of blood at 1.5 Tesla

    BOLD (Blood Oxygenation Level Dependent) imaging is used in fMRI to show differences in activation of the brain based on relative changes in the T2* (= 1/R2*) signal of the blood. However, quantification of the blood oxygenation level from the T2* signal has been hindered by the lack of a predictive model that accurately correlates the T2* signal with the oxygenation level of blood. The T2* signal decay in BOLD imaging arises because blood contains paramagnetic deoxyhemoglobin (in contrast to diamagnetic oxyhemoglobin). This generates local field inhomogeneities, which cause protons to experience different phase shifts, leading to dephasing and MR signal decay. The blood T2* signal has been shown to decay with a complex, so-called non-Lorentzian behavior [1], and thus is not adequately described by the traditional model of simple mono-exponential decay. Theoretical calculations show that diffusion narrowing substantially affects signal loss in our data. Over the past decade, several theoretical models have been proposed to describe this non-Lorentzian behavior of the blood T2* signal in BOLD fMRI. The goal of this project was to investigate the different models proposed over the years and determine a semi-phenomenological model for the T2* behavior using actual MR blood data.
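
    The model comparison described above can be illustrated by fitting synthetic decay data with a mono-exponential and with a quadratic-exponential ("Gaussian-like") form; both functional forms and the synthetic data are generic stand-ins, not the project's blood model:

```python
# Fit synthetic decay data with a mono-exponential and with a
# quadratic-exponential ("Gaussian-like") form; both models and the data
# are generic stand-ins, not the project's blood model.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
te = np.linspace(2e-3, 60e-3, 30)              # echo times (s)
true = np.exp(-20.0 * te - (35.0 * te) ** 2)   # synthetic non-Lorentzian decay
data = true + 0.01 * rng.normal(size=te.size)

def mono(t, s0, r2s):
    return s0 * np.exp(-r2s * t)               # simple mono-exponential

def nonl(t, s0, r2, sig):
    return s0 * np.exp(-r2 * t - (sig * t) ** 2)

for name, model, p0 in [("mono", mono, [1.0, 30.0]),
                        ("non-Lorentzian", nonl, [1.0, 20.0, 30.0])]:
    p, _ = curve_fit(model, te, data, p0=p0)
    rss = np.sum((data - model(te, *p)) ** 2)
    print(f"{name}: params={np.round(p, 2)}, RSS={rss:.2e}")
```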

    Fitting a sum of exponentials to lattice correlation functions using a non-uniform prior

    Excited states are extracted from lattice correlation functions using a non-uniform prior on the model parameters. Models for both a single exponential and a sum of exponentials are considered, as well as an alternate model for the orthogonalization of the correlation functions. Results from an analysis of torelon and glueball operators indicate that the Bayesian methodology compares well with the usual interpretation of effective-mass tables produced by a variational procedure. Applications of the methodology are discussed. Comment: 12 pages, 8 figures, 8 tables, major revision, final version.
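
    In such constrained fits, the non-uniform (Gaussian) priors enter the least-squares problem as extra residual terms. A sketch for a two-exponential model; the correlator data, prior centers, and widths are invented for illustration:

```python
# Constrained two-exponential fit: Gaussian priors enter the least-squares
# problem as extra residuals.  Data, prior centers, and widths are invented.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
t = np.arange(1, 16)
true = 1.0 * np.exp(-0.5 * t) + 0.4 * np.exp(-1.2 * t)
sig = 0.02 * true + 1e-5
data = true + sig * rng.normal(size=t.size)

prior_mu = np.array([1.0, 0.6, 0.5, 1.0])    # priors on (A0, E0, A1, dE)
prior_sig = np.array([0.5, 0.3, 0.5, 0.5])

def resid(p):
    A0, E0, A1, dE = p                        # splitting dE: E1 = E0 + dE
    model = A0 * np.exp(-E0 * t) + A1 * np.exp(-(E0 + dE) * t)
    return np.concatenate([(data - model) / sig,
                           (p - prior_mu) / prior_sig])   # prior residuals

fit = least_squares(resid, prior_mu)
print("MAP parameters (A0, E0, A1, dE):", np.round(fit.x, 3))
```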

    Consistent Application of Maximum Entropy to Quantum-Monte-Carlo Data

    Bayesian statistics within the maximum-entropy framework has been widely used for inferential problems, particularly to infer dynamic properties of strongly correlated fermion systems from Quantum Monte Carlo (QMC) imaginary-time data. In current applications, however, a consistent treatment of the error covariance of the QMC data is missing. Here we present a closed Bayesian approach that accounts consistently for the QMC data. Comment: 13 pages, RevTeX, 2 uuencoded PostScript figures.
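
    The "consistent treatment of the error covariance" amounts to using the full covariance matrix of the QMC data in chi^2, for example by whitening the residuals with a Cholesky factor. This fragment shows only that step, with an invented covariance:

```python
# Using the full error covariance of the QMC data in chi^2 by whitening
# the residuals with a Cholesky factor; the covariance here is invented.
import numpy as np

def chi2_full_cov(data, model, cov):
    """chi^2 = r^T C^{-1} r via a Cholesky solve (no explicit inverse)."""
    r = data - model
    L = np.linalg.cholesky(cov)
    z = np.linalg.solve(L, r)     # whitened residuals, so chi^2 = z . z
    return z @ z

n = 6   # toy correlated errors between neighbouring imaginary-time slices
cov = 0.01 * (np.eye(n) + 0.4 * np.eye(n, k=1) + 0.4 * np.eye(n, k=-1))
print(chi2_full_cov(np.ones(n), np.full(n, 0.9), cov))
```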